Secure Federated Learning against Model Poisoning Attacks via Client Filtering
Because training is distributed, detecting and defending against backdoor
attacks in federated learning (FL) systems is challenging. In this paper, we
observe that the cosine similarity between the last layer's weights of the
global model and of each local update is an effective indicator of
malicious model updates. We therefore propose CosDefense, a
cosine-similarity-based attacker detection algorithm. Specifically, under
CosDefense, the server calculates the cosine similarity score of the last
layer's weight between the global model and each client update, labels
malicious clients whose score is much higher than the average, and filters them
out of the model aggregation in each round. Compared to existing defense
schemes, CosDefense does not require any extra information besides the received
model updates to operate and is compatible with client sampling. Experimental
results on three real-world datasets demonstrate that CosDefense provides
robust performance under state-of-the-art FL poisoning attacks.
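The scoring-and-filtering step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the one-standard-deviation cutoff, and the exact flagging rule are assumptions, since the abstract only says scores "much higher than the average" are filtered out.

```python
import numpy as np

def cosdefense_filter(global_last_layer, client_last_layers, threshold_std=1.0):
    """Illustrative CosDefense-style filter (names and threshold are
    assumptions): score each client by the cosine similarity between its
    last-layer weights and the global model's, then drop clients whose
    score is far above the average before aggregation."""
    g = np.ravel(global_last_layer)
    scores = np.array([
        float(np.dot(g, w) / (np.linalg.norm(g) * np.linalg.norm(w) + 1e-12))
        for w in map(np.ravel, client_last_layers)
    ])
    cutoff = scores.mean() + threshold_std * scores.std()
    # Keep only clients whose similarity is not suspiciously high.
    benign = [i for i, s in enumerate(scores) if s <= cutoff]
    return benign, scores
```

Note that the server needs nothing beyond the received updates themselves, which matches the abstract's claim that no extra information is required.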
Federated Orthogonal Training: Mitigating Global Catastrophic Forgetting in Continual Federated Learning
Federated Learning (FL) has gained significant attention due to its ability
to enable privacy-preserving training over decentralized data. The current
FL literature mostly focuses on single-task learning. However, over time,
new tasks may appear in the clients and the global model should learn these
tasks without forgetting previous tasks. This real-world scenario is known as
Continual Federated Learning (CFL). The main challenge of CFL is Global
Catastrophic Forgetting: when the global model is trained on new tasks, its
performance on old tasks degrades. A few recent works on CFL propose
methods that aim to address global catastrophic forgetting, but they either
make unrealistic assumptions about the availability of past data samples or
violate the privacy principles of FL. We propose a novel method, Federated Orthogonal
Training (FOT), to overcome these drawbacks and address the global catastrophic
forgetting in CFL. Our algorithm extracts the global input subspace of each
layer for old tasks and modifies the aggregated updates of new tasks such that
they are orthogonal to the global principal subspace of old tasks for each
layer. This reduces interference between tasks, which is the main cause of
forgetting. We empirically show that FOT outperforms state-of-the-art
continual learning methods in the CFL setting, achieving an average accuracy
gain of up to 15% with 27% lower forgetting, while incurring only minimal
computation and communication costs.
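The core projection step, extracting a principal input subspace for old tasks and making new-task updates orthogonal to it, can be sketched as below. This is a simplified per-layer illustration under assumed names; the paper's actual subspace-extraction and aggregation procedure may differ.

```python
import numpy as np

# Hypothetical FOT-style sketch; function and variable names are
# illustrative, not taken from the paper's code.

def principal_subspace(activations, k):
    """Extract an orthonormal basis (d, k) for the top-k principal input
    subspace from stacked old-task activations of shape (n_samples, d)."""
    # Rows of vt are right singular vectors of the activation matrix.
    _, _, vt = np.linalg.svd(activations, full_matrices=False)
    return vt[:k].T  # (d, k), orthonormal columns

def project_orthogonal(update, basis):
    """Modify an aggregated update (length d) so it is orthogonal to the
    old-task subspace spanned by the columns of `basis`."""
    # Subtract the component lying inside the old-task subspace.
    return update - basis @ (basis.T @ update)
```

After projection, `basis.T @ update` is (numerically) zero, so applying the update does not move the model along directions the old tasks' inputs occupy, which is how interference between tasks is reduced.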